
    Information Gathering with Peers: Submodular Optimization with Peer-Prediction Constraints

    We study a problem of optimal information gathering from multiple data providers who need to be incentivized to provide accurate information. This problem arises in many real-world applications that rely on crowdsourced data sets, but where the process of obtaining data is costly. A notable example of such a scenario is crowd sensing. To this end, we formulate the problem of optimal information gathering as maximization of a submodular function under a budget constraint, where the budget represents the total expected payment to data providers. In contrast to existing approaches, we base our payments on incentives for accuracy and truthfulness, in particular {\em peer-prediction} methods that score each selected data provider against its best peer, while ensuring that the minimum expected payment is above a given threshold. We first show that the problem at hand is hard to approximate within a constant factor that does not depend on the properties of the payment function. However, for given topological and analytical properties of the instance, we construct two greedy algorithms, called PPCGreedy and PPCGreedyIter, and establish theoretical bounds on their performance w.r.t. the optimal solution. Finally, we evaluate our methods using a realistic crowd sensing testbed.
    Comment: Longer version of AAAI'18 paper.
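
    The abstract only names the algorithms; as a rough illustration of the core idea, the following is a minimal cost-benefit greedy sketch for monotone submodular maximization under a knapsack-style budget. The callables gain(p, S) (marginal gain of adding provider p to the selected set S) and cost(p, S) (expected payment to p) are hypothetical placeholders, and the paper's PPCGreedy/PPCGreedyIter additionally enforce the peer-prediction payment constraints.

        import math

        def budgeted_greedy_sketch(providers, gain, cost, budget):
            """Repeatedly pick the affordable provider with the best
            marginal-gain-per-payment ratio until the budget runs out."""
            selected, spent = [], 0.0
            remaining = set(providers)
            while remaining:
                best, best_cost, best_ratio = None, 0.0, -math.inf
                for p in remaining:
                    c = cost(p, selected)          # expected payment to p
                    if c <= 0 or spent + c > budget:
                        continue
                    ratio = gain(p, selected) / c  # value per unit of budget
                    if ratio > best_ratio:
                        best, best_cost, best_ratio = p, c, ratio
                if best is None:                   # nothing affordable is left
                    break
                selected.append(best)
                spent += best_cost
                remaining.discard(best)
            return selected, spent

    A standard caveat for this family of heuristics: under a knapsack constraint, the ratio-greedy solution should be compared against the best single affordable element to obtain a constant-factor guarantee (Khuller et al., 1999).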

    Bayesian Fairness

    We consider the problem of how decision making can be fair when the underlying probabilistic model of the world is not known with certainty. We argue that recent notions of fairness in machine learning need to explicitly incorporate parameter uncertainty, and we therefore introduce the notion of {\em Bayesian fairness} as a suitable candidate for fair decision rules. Using balance, a definition of fairness introduced by Kleinberg et al. (2016), we show how a Bayesian perspective can lead to well-performing, fair decision rules even under high uncertainty.
    Comment: 13 pages, 8 figures, to appear at AAAI 2019.
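
    The abstract does not spell out the decision rule; one natural reading of Bayesian fairness is that both utility and the fairness violation (e.g., a balance violation) are evaluated in expectation over the posterior of the unknown model parameters, rather than at a point estimate. A minimal sketch under that assumption, where utility, fairness_violation, the posterior samples, and the trade-off weight lam are all hypothetical placeholders:

        import numpy as np

        def posterior_fair_score(policy, posterior_samples, utility,
                                 fairness_violation, lam):
            """Average utility minus a weighted fairness penalty over
            posterior draws of the model parameters theta."""
            scores = [utility(policy, theta) - lam * fairness_violation(policy, theta)
                      for theta in posterior_samples]
            return float(np.mean(scores))

        def select_policy(candidates, posterior_samples, utility,
                          fairness_violation, lam=1.0):
            """Pick the candidate decision rule with the best
            posterior-averaged, fairness-penalized score."""
            return max(candidates,
                       key=lambda pi: posterior_fair_score(
                           pi, posterior_samples, utility, fairness_violation, lam))

    The point of averaging over the posterior, rather than plugging in a single estimate, is that a rule that is fair under one plausible model but very unfair under another is penalized accordingly.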

    Elicitation and Aggregation of Crowd Information

    This thesis addresses challenges in the elicitation and aggregation of crowd information in settings where an information collector, called the center, has limited knowledge about the information providers, called agents. Each agent is assumed to hold noisy private information that brings a high information gain to the center when aggregated with the private information of other agents. We address two particular issues in eliciting crowd information: 1) how to incentivize agents to participate and provide accurate data; 2) how to aggregate crowd information so that the negative impact of agents who provide low-quality information is bounded. We examine three different information elicitation settings.

    In the first elicitation setting, agents report their observations regarding a single phenomenon that represents an abstraction of a crowdsourcing task. The center itself does not observe the phenomenon, so it rewards agents by comparing their reports. Clearly, a rational agent bases her reporting strategy on what she believes about other agents, called peers. We prove that, in general, no payment mechanism can achieve strict properness (i.e., make truthful reporting a strict equilibrium strategy) if agents only report their observations, even if they share a common belief system. This motivates the use of payment mechanisms that are based on an additional report. We show that a general payment mechanism cannot have a simple structure, often adopted by prior work, and that in the limit case, when observations can take real values, agents are constrained to share a common belief system. Furthermore, we develop several payment mechanisms for the elicitation of non-binary observations.

    In the second elicitation setting, a group of agents observes multiple a priori similar phenomena. Due to the a priori similarity condition, this setting is a refinement of the former one and enables stronger incentive properties without requiring additional reports or constraining agents to share a common belief system. We extend the existing mechanisms to allow non-binary observations by constructing strongly truthful mechanisms (i.e., mechanisms in which truthful reporting is the highest-paying equilibrium) for different types of agent populations.

    In the third elicitation setting, agents observe a time-evolving phenomenon, and a few of them, whose identity is known, are trusted to report truthful observations. The existence of trusted agents makes this setting much more stringent than the previous ones. We show that, in the context of online information aggregation, one can not only incentivize agents to provide informative reports, but also limit the effectiveness of malicious agents who deliberately misreport. To do so, we construct a reputation system that bounds the negative impact that any misreporting strategy can have on the learned aggregate, as sketched below. Finally, we experimentally verify the effectiveness of the novel elicitation mechanisms in community sensing simulation testbeds and a peer grading experiment.
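
    As a small illustration of the third setting's reputation idea, the sketch below performs one round of online aggregation: reports are combined by current reputation weights, and non-trusted agents whose reports deviate from the trusted agents' consensus have their weights shrunk multiplicatively. The squared loss, the learning rate eta, and anchoring on the trusted mean are illustrative assumptions; the thesis's actual reputation system and its impact bounds are more involved.

        import numpy as np

        def aggregate_and_update(reports, trusted_ids, weights, eta=0.1):
            """reports: dict agent -> scalar report for the current round.
            Returns the weighted aggregate and the updated reputation weights."""
            agents = list(reports)
            w = np.array([weights[a] for a in agents], dtype=float)
            x = np.array([reports[a] for a in agents], dtype=float)
            estimate = float(np.dot(w, x) / w.sum())       # reputation-weighted mean
            anchor = float(np.mean([reports[a] for a in trusted_ids]))
            for a in agents:
                if a in trusted_ids:
                    continue                               # trusted weights stay fixed
                loss = (reports[a] - anchor) ** 2          # deviation from trusted consensus
                weights[a] *= np.exp(-eta * loss)          # multiplicative penalty
            return estimate, weights

    Because each misreport costs the misreporting agent weight exponentially fast, the cumulative influence any single agent can exert on the aggregate stays bounded, which is the flavor of guarantee described above.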

    Calibrated Fairness in Bandits

    We study fairness within the stochastic \emph{multi-armed bandit} (MAB) decision-making framework. We adapt the fairness framework of "treating similar individuals similarly" to this setting. Here, an `individual' corresponds to an arm, and two arms are `similar' if they have similar quality distributions. First, we adopt a {\em smoothness constraint}: if two arms have similar quality distributions, then the probability of selecting each arm should be similar. In addition, we define the {\em fairness regret}, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm equals the probability with which that arm has the best quality realization. We show that a variation on Thompson sampling satisfies smooth fairness for total variation distance, and give an $\tilde{O}((kT)^{2/3})$ bound on fairness regret. This complements prior work, which protects an on-average better arm from being less favored. We also explain how to extend our algorithm to the dueling bandit setting.
    Comment: To be presented at the FAT-ML'17 workshop.
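
    For reference, here is plain Thompson sampling for Bernoulli arms with Beta(1,1) priors; the paper's fairness-aware variant modifies this scheme (e.g., to satisfy the smoothness constraint), and the arm means and horizon below are made-up values. The relevant property is that each round selects an arm with exactly the posterior probability that it is currently the best, which is what calibration, and hence fairness regret, is measured against.

        import numpy as np

        def thompson_bernoulli(true_means, horizon, seed=0):
            """Beta-Bernoulli Thompson sampling; returns the total reward."""
            rng = np.random.default_rng(seed)
            k = len(true_means)
            alpha = np.ones(k)                    # 1 + observed successes per arm
            beta = np.ones(k)                     # 1 + observed failures per arm
            total = 0.0
            for _ in range(horizon):
                draws = rng.beta(alpha, beta)     # one posterior sample per arm
                arm = int(np.argmax(draws))       # play the sampled best arm
                r = float(rng.random() < true_means[arm])
                alpha[arm] += r
                beta[arm] += 1.0 - r
                total += r
            return total

        # Hypothetical usage: total = thompson_bernoulli([0.4, 0.5, 0.55], 10_000)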

    How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness

    What is the best way to define algorithmic fairness? While many definitions of fairness have been proposed in the computer science literature, there is no clear agreement on any particular definition. In this work, we investigate ordinary people's perceptions of three of these fairness definitions. Across two online experiments, we test which definitions people perceive to be the fairest in the context of loan decisions, and whether fairness perceptions change with the addition of sensitive information (i.e., the race of the loan applicants). Overall, one definition (calibrated fairness) tends to be preferred over the others, and the results also provide support for the principle of affirmative action.
    Comment: To appear at AI Ethics and Society (AIES) 2019.

    Learning Embeddings for Sequential Tasks Using Population of Agents

    We present an information-theoretic framework to learn fixed-dimensional embeddings for tasks in reinforcement learning. We leverage the idea that two tasks are similar to each other if observing an agent's performance on one task reduces our uncertainty about its performance on the other. This intuition is captured by our information-theoretic criterion, which uses a diverse population of agents to measure similarity between tasks in sequential decision-making settings. In addition to qualitative assessment, we empirically demonstrate the effectiveness of our techniques based on task embeddings by quantitative comparisons against strong baselines in two application scenarios: predicting an agent's performance on a test task by observing its performance on a small quiz of tasks, and selecting tasks with desired characteristics from a given set of options.
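
    The paper's criterion is information-theoretic and relies on a diverse agent population; as a crude correlational proxy for the same intuition, one can represent each task by the vector of scores the population achieves on it and reduce those vectors with PCA, so tasks on which agents behave similarly land close together. Everything here (the perf matrix, the embedding dimension) is a hypothetical placeholder, not the method from the paper.

        import numpy as np

        def task_embeddings(perf, dim=2):
            """perf: (num_agents, num_tasks) array of each agent's score on
            each task. Returns a (num_tasks, dim) embedding via PCA over the
            population-score representation of each task."""
            X = perf.T - perf.T.mean(axis=0, keepdims=True)   # tasks x agents, centered
            _, _, vt = np.linalg.svd(X, full_matrices=False)  # principal directions
            return X @ vt[:dim].T                             # project to top components

        # Hypothetical usage with random data:
        # perf = np.random.default_rng(0).random((50, 10))    # 50 agents, 10 tasks
        # emb = task_embeddings(perf, dim=3)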

    Incentives for Subjective Evaluations with Private Beliefs

    The modern web critically depends on the aggregation of information from self-interested agents, for example in opinion polls, product ratings, or crowdsourcing. We consider a setting where multiple objects (questions, products, tasks) are evaluated by a group of agents. We first construct a minimal peer prediction mechanism that elicits honest evaluations from a homogeneous population of agents with different private beliefs. Second, we show that it is impossible to strictly elicit honest evaluations from a heterogeneous group of agents with different private beliefs. Nevertheless, we provide a modified version of a divergence-based Bayesian Truth Serum that incentivizes agents to report consistently, making truthful reporting a weak equilibrium of the mechanism.
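
    The divergence-based Bayesian Truth Serum itself is not reproduced here; as a generic illustration of the multi-object peer-prediction idea it builds on, the sketch below scores each agent by agreement with a random peer on a shared object, minus agreement measured across two different objects (in the spirit of Dasgupta and Ghosh, 2013), so that blind collusion on a constant report earns nothing in expectation. The report format and sampling choices are illustrative assumptions.

        import random

        def peer_prediction_payments(reports, seed=0):
            """reports: dict agent -> dict object -> report (hashable values).
            Returns a dict of agent payments in [-1, 1]."""
            rng = random.Random(seed)
            agents = list(reports)
            payments = {}
            for a in agents:
                peer = rng.choice([b for b in agents if b != a])
                shared = [o for o in reports[a] if o in reports[peer]]
                if len(shared) < 3:
                    payments[a] = 0.0            # not enough overlap to score
                    continue
                bonus_obj, obj_a, obj_p = rng.sample(shared, 3)
                bonus = float(reports[a][bonus_obj] == reports[peer][bonus_obj])
                penalty = float(reports[a][obj_a] == reports[peer][obj_p])
                payments[a] = bonus - penalty    # reward informative agreement only
            return payments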